Awareness of the road scene is an essential component for both autonomous vehicles and Advanced Driver Assistance Systems, and is gaining importance for both academia and car companies. This paper presents a way to learn a semantic-aware transformation which maps detections from a dashboard camera view onto a broader bird's eye occupancy map of the scene. To this end, a huge synthetic dataset featuring 1M pairs of frames, taken from both the car dashboard and a bird's eye view, has been collected and automatically annotated. A deep network is then trained to warp detections from the first to the second view. We demonstrate the effectiveness of our model against several baselines and observe that it is able to generalize to real-world data despite having been trained solely on synthetic data.